
                                      AI Assistants: A New Challenge for CISOs

                                      Mar 27 2024

                                      Over the past year, AI innovation has swept through the workplace. Across industries and all team functions, we are seeing employees using AI assistants to streamline various tasks, including taking minutes, writing emails, developing code, crafting marketing strategies and even helping with managing company finances. As a CISO, I’m already envisaging an AI assistant which will help me with compliance strategy by actively monitoring regulatory changes, evaluating an organisation’s compliance status, and identifying areas for improvement. 

However, amidst all this enthusiasm, there is a very real challenge facing CISOs and DPOs: how to protect corporate data and IP from leakage through these generative AI platforms and to third-party providers.

                                      Curiosity won’t kill the CISO

While many enterprises have contemplated blocking these tools outright on their systems, doing so could limit innovation, create a culture of distrust in the workforce, or even lead to "Shadow AI": the unapproved use of third-party AI applications outside of the corporate network. To a certain extent, the horse has already bolted. Data shows that within enterprises, AI assistants are already integrated into day-to-day tasks. The writing assistant Grammarly, the second most popular generative AI app, is currently used by 3.1% of employees, and I've noticed that around a third of the conference calls I attend now have an AI assistant on the guest list. With the increasing availability of AI assistants like Microsoft Copilot and Motion, the researchers at Netskope Threat Labs are clear that they expect AI assistants to grow in popularity in 2024.

                                      Instead of blocking the tools outright, CISOs can deploy continuous protection policies using intelligent Data Loss Prevention (DLP) tools to safely use AI applications. DLP tools can ensure no sensitive information is used within input queries to AI applications, protecting critical data and preventing unauthorised access, leaks, or misuse. 
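The kind of input screening described above can be approximated in a few lines. The sketch below is purely illustrative and is not Netskope's DLP engine: the pattern set, function name, and redaction format are assumptions of mine. It scans a prompt for a handful of sensitive-data patterns and redacts any matches before the text would be forwarded to an AI application.

```python
import re

# Illustrative detectors only. Production DLP engines use far richer
# techniques: exact-data matching, document fingerprinting, ML classifiers.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive matches from a prompt before it reaches an AI app.

    Returns the redacted prompt and the list of rule names that fired.
    """
    hits = []
    redacted = prompt
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            hits.append(name)
            redacted = pattern.sub(f"[REDACTED:{name}]", redacted)
    return redacted, hits
```

In practice a real DLP product enforces this kind of policy inline at the network or endpoint layer, transparently to the user, rather than in application code; the point here is only that the check happens on the input side, before sensitive data ever leaves the organisation.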

                                      CISOs should also take an active role in evaluating the applications used by employees, restricting access to those that do not align with business needs or pose an undue risk.

                                      Once a CISO identifies an AI assistant as relevant to their organisation, the next step involves vetting the vendor and assessing its data policies. During this process, CISOs should equip themselves with an extensive list of questions, including: 

                                      1. Data handling practices: What becomes of the data an employee inputs?

Understanding how the vendor manages and protects the data is crucial for ensuring data privacy and security. A study by the World Economic Forum found that a staggering 95% of cybersecurity incidents stem from human error, and entrusting sensitive data to a third-party AI assistant can exacerbate this risk.

There's even greater cause for pause: by feeding data into these tools, organisations may be inadvertently contributing to the training of potentially competitive AI models. This can lead to a scenario where proprietary information or insights about the organisation's operations are leveraged by competitors, posing significant risks to the organisation's competitive advantage and market position.
2. Is the model used for additional services privately or publicly? Is the model developed by the company itself or based upon a third-party solution?

Many AI assistant apps used by employees depend on third-party and even fourth-party services. It's common for employees to use apps without being aware that the backend infrastructure operates on a publicly accessible platform. As CISOs, we are particularly mindful of the significant costs associated with AI technology, so we know that free or inexpensive options make their money in other ways: selling either the data itself or the AI intelligence that the data has helped to build. In such cases, a thorough examination of the fine print becomes imperative for CISOs to ensure the protection and privacy of sensitive data.
3. What happens to the output? Are these outputs employed to train subsequent models?

Many AI vendors do not just use the data input to train their models; they also use the data output. This loop creates ever more tangled ways in which the apps could inadvertently expose sensitive company information or lead to copyright infringement, and it can be hard to untangle in supply chain data protection planning.

                                      Looking within

As private enterprises await stronger legislative guidance on AI, it falls on CISOs and DPOs to promote self-regulation and ethical AI practices within their organisations. With the proliferation of AI assistants, it is crucial they act now to evaluate the implications of AI tools in the workplace. Every employee will soon be performing many day-to-day tasks in conjunction with their AI assistants. This should motivate companies to set up internal governance committees not just to evaluate tools and their applications, but also to discuss AI ethics, review processes, and plan strategy in advance of more widespread adoption and regulation. This is exactly how we are approaching the challenge within the security teams here at Netskope: an AI governance committee is responsible for our AI strategy and has built the mechanisms to properly inspect emerging vendors and their data processing approaches.

                                      Employees across all industries and all levels can benefit from an AI assistant, with Bill Gates saying “they will utterly change how we live our lives, online and off.” For CISOs, the key to unlocking their potential starts with responsible governance.

                                      Neil Thacker
                                      Neil Thacker is a veteran information security professional and a data protection and privacy expert well-versed in the European Union GDPR.
